54 research outputs found

    Integration of therapy planning and standardized documentation: results from the development and introduction of a computer-based application system for pediatric oncology

    Pediatric oncology and hematology is characterized by a relatively low incidence rate and, consequently, a low number of cases for the various oncological and hematological diseases. As a result, an individual hospital can accumulate only limited experiential knowledge on the diagnostics and treatment of a particular disease. In addition, a comparatively large number of persons, professional groups, and institutions such as reference centers are involved in the treatment of a patient in pediatric oncology, and they must communicate with each other as a multiprofessional treatment team in the sense of holistic therapy. Despite the low case numbers, cancer is the second most frequent cause of death in childhood. Nevertheless, quite good cure rates for childhood cancer are achieved in Germany today. Multicenter therapy optimization trials have made a decisive contribution to this since the 1970s. The therapy protocols issued by the trial centers of these therapy optimization trials define a high-quality treatment that corresponds to the current state of science. In most cases, the focus of this treatment is chemotherapy. Chemotherapy planning for children is extremely complex and laborious. Because of the high toxicity of these therapies, an error in a therapy schedule can lead to severe acute toxicities and long-term sequelae, which is why errors must be avoided at all costs. In addition, the cooperation between the hospitals and the trial centers requires a very high documentation effort. Providing the required data demands considerable effort in the hospitals, which is further complicated by non-uniform documentation. To support this multicenter environment, the goals were (i) to develop, introduce, and maintain a documentation and chemotherapy planning system for pediatric oncology (the DOSPO core system), (ii) to develop a terminology server for pediatric oncology, and (iii) to develop a generic tool (module generator) for creating trial databases and trial-specific modules for the DOSPO core system based on the terminology of the terminology server. For this purpose, the basic data set of pediatric oncology was implemented in the DOSPO core system. In addition to the documentation of these data, functions for chemotherapy planning, report writing, etc. are provided. For the documentation of trial-specific data, trial-specific modules can be developed and integrated into the DOSPO core system. To support the trial centers in this task, a generic tool is being developed. This tool is based on the terminology server, in which all data items of the pediatric oncology therapy trials in Germany are to be stored in standardized form. The aim of this report is to give an overview of the results from the development and introduction of the DOSPO application system with regard to the integration of therapy planning and standardized documentation.
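
    A minimal sketch of the module-generator idea, assuming a very simplified terminology: the Python below shows how a study-specific documentation form could be derived from standardized terminology items. All names (TerminologyItem, generate_form, the field codes, and the study identifier) are hypothetical and not taken from the actual DOSPO implementation.

        # Hypothetical example, not the actual DOSPO code: derive a simple
        # study-specific form definition from standardized terminology items.
        from dataclasses import dataclass

        @dataclass
        class TerminologyItem:
            code: str       # standardized identifier from the terminology server
            label: str      # human-readable name of the data element
            datatype: str   # e.g. "date", "number", "text"

        def generate_form(study_id, items):
            """Build a form definition with one field per terminology item."""
            return {
                "study": study_id,
                "fields": [{"code": i.code, "label": i.label, "type": i.datatype} for i in items],
            }

        if __name__ == "__main__":
            items = [
                TerminologyItem("DX_DATE", "Date of diagnosis", "date"),
                TerminologyItem("WBC", "White blood cell count", "number"),
            ]
            print(generate_form("EXAMPLE-TRIAL", items))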

    A process model for acquiring international administrative routine data for health services research

    Objectives: To describe a practical and standardized approach for acquiring international administrative routine data from different data owners for research. Methods: A best-practice approach based on the experiences gained during the EU-funded ADVOCATE ("Added Value for Oral Care") project, which involved the collection of routinely collected administrative data from health insurance providers, health funds, or health authorities in six European countries. Results: A general four-phase process for data acquisition was developed: First, the conditions for data usage and access are determined. These conditions are subsequently tested by sharing and analyzing a data sample (quality and validity audit). After optimizing the process model, full-scale data access and analysis are performed. Conclusions: The general data acquisition approach has been applied successfully in the ADVOCATE project to acquire claims data from eight data owners, each of which prescribed different usage conditions. The approach aims to contribute to a standardized process model for acquiring administrative routine data for research and to provide researchers with a methodological framework.
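
    A minimal sketch of the four-phase acquisition process described above, modeled as an ordered sequence of phases per data owner; the phase names paraphrase the abstract, and the class and function names are illustrative rather than part of any ADVOCATE tooling.

        # Hypothetical example: the four phases of the acquisition process as an
        # ordered enumeration, advanced one step at a time per data owner.
        from enum import Enum

        class Phase(Enum):
            DEFINE_CONDITIONS = 1   # agree on conditions for data usage and access
            SAMPLE_AUDIT = 2        # share and analyze a data sample (quality and validity audit)
            OPTIMIZE_PROCESS = 3    # adjust the process model based on the audit findings
            FULL_ACCESS = 4         # full-scale data access and analysis

        def next_phase(current):
            """Return the following phase, or None when the process is complete."""
            members = list(Phase)
            idx = members.index(current)
            return members[idx + 1] if idx + 1 < len(members) else None

        if __name__ == "__main__":
            phase = Phase.DEFINE_CONDITIONS
            while phase is not None:
                print(f"data owner A: {phase.name}")
                phase = next_phase(phase)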

    Information management for enabling systems medicine

    Systems medicine is a data-oriented approach in research and clinical practice to support the study and treatment of complex diseases. It relies on well-defined information management processes that provide comprehensive and up-to-date information as the basis for electronic decision support. The authors suggest a three-layer information technology (IT) architecture for systems medicine and a cyclic data management approach including a knowledge base that is dynamically updated by extract, transform, and load (ETL) procedures. Decision support is provided by case-based and rule-based components. Results are presented via a user interface that acknowledges clinical requirements in terms of time and complexity. The systems medicine application was implemented as a prototype.
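
    A minimal sketch of the cyclic data management and rule-based decision support idea, assuming a toy knowledge base that is refreshed by an ETL step and queried by simple rules; all field names, rules, and evidence levels are invented for illustration and do not reflect the prototype's actual implementation.

        # Hypothetical example: a toy knowledge base refreshed by an ETL step
        # and queried by a simple rule-based decision support function.
        def extract_transform_load(source_records):
            """Aggregate source records into a small knowledge base."""
            kb = {"variant_annotations": {}}
            for rec in source_records:
                kb["variant_annotations"][rec["variant"]] = rec["evidence_level"]
            return kb

        def rule_based_support(patient, kb):
            """Return advisory messages for patient variants with known annotations."""
            messages = []
            for variant in patient.get("variants", []):
                level = kb["variant_annotations"].get(variant)
                if level is not None:
                    messages.append(f"{variant}: evidence level {level}, review suggested")
            return messages

        if __name__ == "__main__":
            kb = extract_transform_load([{"variant": "BRAF V600E", "evidence_level": "A"}])
            print(rule_based_support({"variants": ["BRAF V600E", "KRAS G12D"]}, kb))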

    Requirements for data integration platforms in biomedical research networks: a reference model

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource-efficient acquisition of their project-specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived, which is also described in this paper.
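
    A minimal sketch of how reference goals and requirements could be held as data and instantiated for a concrete network; the two example entries are invented, whereas the actual reference model defines five goals and fifteen requirements in UML diagrams and tables.

        # Hypothetical example: reference goals and requirements as plain data,
        # filtered down to a project-specific requirements model.
        from dataclasses import dataclass, field

        @dataclass
        class ReferenceRequirement:
            rid: str
            text: str

        @dataclass
        class ReferenceGoal:
            gid: str
            text: str
            requirements: list = field(default_factory=list)

        def instantiate(goals, selected_ids):
            """Keep only the requirements selected for the concrete network."""
            return [r for g in goals for r in g.requirements if r.rid in selected_ids]

        if __name__ == "__main__":
            goals = [
                ReferenceGoal("G1", "Share research data among network members", [
                    ReferenceRequirement("R1", "Pseudonymized data exchange"),
                    ReferenceRequirement("R2", "Role-based access control"),
                ]),
            ]
            print(instantiate(goals, {"R2"}))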

    Systematic planning of patient records for cooperative care and multicenter research

    Purpose: The purpose of this paper is to introduce a method for systematically planning patient records for structured data entry that can be used in cooperative environments (e.g. cooperative care, multicenter trials) in a way that enables multipurpose use and shared data entry. Methods: Design research, formal logic. Results: The method suggests five steps: analyze the prevailing documentation infrastructure, provide a terminology management system (TMS), provide a documentation management system (DMS), plan the logical architecture, and provide all necessary tools. Conclusions: The era of eHealth enables cooperative care and collaborative documentation. This can only be efficient if multiple use and shared entry of data are realized. The task of the medical informatics community is to plan these environments systematically, especially in complex environments enabled by emerging technologies.
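
    A minimal sketch, assuming the five steps are kept as an ordered checklist so a project's progress can be tracked; the step wording is taken from the abstract, everything else is illustrative.

        # Hypothetical example: the five planning steps as an ordered checklist.
        PLANNING_STEPS = [
            "Analyze the prevailing documentation infrastructure",
            "Provide a terminology management system (TMS)",
            "Provide a documentation management system (DMS)",
            "Plan the logical architecture",
            "Provide all necessary tools",
        ]

        def report_progress(completed):
            """Print which of the five steps have been completed so far."""
            for number, step in enumerate(PLANNING_STEPS, start=1):
                mark = "x" if number in completed else " "
                print(f"[{mark}] step {number}: {step}")

        if __name__ == "__main__":
            report_progress({1, 2})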

    Sensor-Based Measurements in Paraplegia: Classified References from a Systematic Review

    This dataset contains the results (publication references) of the systematic review "Current Use of Sensor-Based Measurements for Paraplegics", presented at MIE 2020, Geneva

    Assessment of automatically exported clinical data from a hospital information system for clinical research in multiple myeloma

    PURPOSE: An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be exported automatically to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system, in particular in comparison with manual transcription, have not been reported extensively. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. METHODS: The quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transfer. The automatic transfer was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. RESULTS: The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information that was missing in the HIS but transcribed to the EDC from other physical sources. CONCLUSION: The automatic ETL process can be used to collect laboratory data for clinical research if the data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
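
    A minimal sketch of an Extract, Transform and Load step for laboratory values with a simple completeness check, in the spirit of the automatic transfer evaluated above; the HIS codes, EDC field names, and the mapping table are invented, and the paper's actual HIS and EDC interfaces are not shown.

        # Hypothetical example: a small ETL step for laboratory values plus a
        # completeness measure; HIS codes and EDC field names are invented.
        FIELD_MAP = {"HB": "hemoglobin_g_dl", "LEU": "leukocytes_per_nl"}  # HIS code -> EDC field

        def etl(his_rows):
            """Extract HIS laboratory rows and rename the mapped fields for the EDC."""
            return [{FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP} for row in his_rows]

        def completeness(records):
            """Share of expected EDC fields that are actually filled."""
            expected = len(records) * len(FIELD_MAP)
            filled = sum(len(r) for r in records)
            return filled / expected if expected else 1.0

        if __name__ == "__main__":
            records = etl([{"HB": 13.2, "LEU": 5.4}, {"HB": 11.8}])
            print(records, f"completeness={completeness(records):.0%}")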

    Preparing the electronic patient record for collaborative environments and ehealth

    In the era of eHealth, the electronic patient record is increasingly regarded as part of a collaborative environment. To efficiently support the documentary tasks and analyses, a cooperative documentation infrastructure that allows multiple use and shared entry of data is necessary. The objective of this paper is to introduce a method for systematically planning such a cooperative documentation environment. It consists of the steps: analyse the prevailing documentation infrastructure, provide terminology, provide documentation management, plan the logical architecture, and provide all necessary tools. The steps can be formally specified so that parameters can be controlled automatically and the environment can be updated more easily.
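
    A minimal sketch of the idea that the steps can be formally specified so that parameters are controlled automatically: a small validation of an environment description against a set of required parameters, all of which are invented for illustration.

        # Hypothetical example: check an environment description against a set of
        # required parameters; the parameter names are invented for illustration.
        REQUIRED_PARAMETERS = {
            "terminology_system",     # which terminology provides the shared vocabulary
            "documentation_system",   # which system manages the record definitions
            "logical_architecture",   # e.g. "central" or "federated"
        }

        def missing_parameters(spec):
            """Return the required parameters that are absent from the specification."""
            return sorted(REQUIRED_PARAMETERS - spec.keys())

        if __name__ == "__main__":
            spec = {"terminology_system": "SNOMED CT", "documentation_system": "local DMS"}
            print("missing parameters:", missing_parameters(spec) or "none")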

    Graph-Representation of Patient Data: a Systematic Literature Review

    Graph theory is a well-established theory with many methods used in mathematics to study graph structures. In the field of medicine, electronic health records (EHR) are commonly used to store and analyze patient data. Consequently, it seems straightforward to perform research on modeling EHR data as graphs. This systematic literature review aims to investigate the frontiers of current research in the field of graphs representing and processing patient data. We want to show which areas of research in this context need further investigation. The databases MEDLINE, Web of Science, IEEE Xplore and ACM Digital Library were queried using the search terms health record, graph and related terms. Based on the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) statement guidelines, the articles were screened and evaluated using full-text analysis. Eleven out of the 383 articles found in the systematic literature search were finally included for analysis in this review. Most of them use graphs to represent temporal relations, often representing the connections among laboratory data points. Only two papers report that the graph data were further processed by comparing the patient graphs using similarity measurements. Graphs representing individual patients are hardly used in a research context; only eleven papers considered such graphs in their investigations. The potential of well-established graph-theoretical algorithms could help this research field grow, but currently there are too few papers to estimate how this area of research will develop. Altogether, the use of such patient graphs could be a promising technique for developing decision support systems for diagnosis, medication, or therapy of patients using similarity measurements or other kinds of analysis.
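
    A minimal sketch of one possible similarity measurement between patient graphs, assuming each record is reduced to a set of edges between clinical entities and compared with a Jaccard index; the clinical content is invented and the reviewed papers may use quite different graph models and measures.

        # Hypothetical example: two patient records reduced to edge sets and
        # compared with a Jaccard index; the clinical content is invented.
        def jaccard_similarity(edges_a, edges_b):
            """Jaccard index of two edge sets: intersection size over union size."""
            union = edges_a | edges_b
            return len(edges_a & edges_b) / len(union) if union else 1.0

        if __name__ == "__main__":
            patient_a = {("diagnosis:diabetes", "lab:HbA1c"), ("lab:HbA1c", "value:high")}
            patient_b = {("diagnosis:diabetes", "lab:HbA1c"), ("lab:HbA1c", "value:normal")}
            print(f"similarity = {jaccard_similarity(patient_a, patient_b):.2f}")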

    Electronic patient records: moving from islands and bridges towards electronic health records for continuity of care

    Objectives: Electronic patient record (EPR) systems are increasingly used and have matured sufficiently to contribute to high-quality care and efficient patient management. Our objective is to summarize current trends and major achievements in the field of EPR in the last year and to discuss their future prospects. Results: Integrating health data from a variety of sources in a comprehensive EPR is a major prerequisite for e-health and e-research. Current research continues to elaborate architectures, technologies and security concepts. To achieve semantic interoperability, standards are developed on different levels, including basic data types, messages, services, architectures, terminologies, ontologies, and the scope and presentation of EPR content. Standards development organisations have started to harmonize their work to arrive at a consensus standard for EPR systems. Integrating the health care enterprise as a whole will optimize the efficient use of resources, logistics and scheduling. Conclusions: The past few years have seen a myriad of developments in EPR systems. However, there is still a long way to go until EPR systems can flexibly fulfill all user requirements and an EHR becomes broadly accepted. Semantic interoperability will be key to successful EPR use, especially to avoid double data entry and to better integrate data recording within local workflows. The patient will become an empowered partner, and not only by being given access to his or her health data. All this will result in enormous quantities of data. Thus, the time has come to determine how relevant data can be presented adequately to the stakeholders.